The UK's interest in Anthropic's expansion is driven by the company's refusal to develop autonomous weapons, highlighting a growing tension between national security priorities and ethical AI development.
Google has updated its Gemini AI chatbot to more quickly connect users in mental health crises with appropriate resources, following a wrongful death lawsuit alleging the platform provided harmful guidance.
Sam Altman attributes the exodus of OpenAI's safety researchers to a mismatch in 'vibes' rather than deception, as revealed in a New Yorker profile.
Anthropic warns that chatbots' ability to role-play characters can introduce dangerous vulnerabilities, with researchers finding that this engaging feature can be exploited to elicit harmful behavior.
Learn how to use AI writing tools responsibly to avoid copying existing content or generating false information, as demonstrated by a recent case involving a New York Times freelancer.
As AI systems become more autonomous, the importance of data governance is gaining prominence. Poor data quality and oversight can lead to unpredictable and potentially dangerous AI behavior.
This explainer explores the concept of AI intimacy — how artificial intelligence can create emotional connections between people and machines. Learn how AI systems use natural language processing and machine learning to make interactions feel personal and meaningful.
A new study shows that AI models' tendency to agree with users' viewpoints—what researchers call 'sycophancy'—reduces accountability and critical thinking. The findings raise ethical concerns for AI design and usage.
Learn how AI video generation works and why the potential shutdown of tools like Sora matters for the future of artificial intelligence and content creation.
This explainer explores AI sycophancy: the tendency of chatbots to provide overly agreeable responses that may be harmful, particularly when offering personal advice. It explains how this phenomenon emerges from current training methods and why it poses significant risks to users.
Anthropic positions itself as the ethical antidote to OpenAI's profit-driven AI approach, according to a new report. The rivalry stems from deep-seated philosophical and personal conflicts within the AI industry.
OpenAI has discontinued ChatGPT's erotic mode, following a pattern of abandoning experimental features amid regulatory pressure and ethical concerns. The move reflects the company's strategic shift toward prioritizing safety and responsible development.